
 AAAI AI-Alert for Jul 21, 2020


How NASA Built a Self-Driving Car for Its Next Mars Mission

WIRED

Later this month, NASA is expected to launch its latest Mars rover, Perseverance, on a first-of-its-kind mission to the Red Planet. Its job is to collect and store geological samples so they can eventually be returned to Earth. Perseverance will spend its days poking around the Jezero Crater, an ancient Martian river delta, and the samples it collects may contain the first evidence of extraterrestrial life. But first it has to find them. For that, it needs some damn good computers--at least by Martian standards.


OpenAI's new language generator GPT-3 is shockingly good--and completely mindless

MIT Technology Review

And a tool like this has many new uses, both good (from powering better chatbots to helping people code) and bad (from powering better misinformation bots to helping kids cheat on their homework). But when a new AI milestone comes along it too often gets buried in hype. Even Sam Altman, who co-founded OpenAI with Elon Musk, tried to tone things down: "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."


New Zealand: New volcano alert system 'could have warned of White Island eruption'

BBC News

New Zealand scientists have invented a new volcano alert system that they say could have provided warning ahead of last year's White Island disaster. Twenty-one people died when the country's most active volcano, also called Whakaari, suddenly erupted last December with tourists on it. The new system uses machine learning algorithms to analyse real-time data to predict future eruptions. The research was published in the journal Nature last week. One of the scientists involved in the project, Shane Cronin from the University of Auckland, told the BBC the current system had been "too slow to provide warnings for people [on] the island." "The current [alert system] collects data in real-time but what tends to happen is that this information gets assessed by a panel and they have an expert process... this all takes a while," he said.


Researchers warn court ruling could have a chilling effect on adversarial machine learning

#artificialintelligence

A cross-disciplinary team of machine learning, security, policy, and law experts say inconsistent court interpretations of an anti-hacking law have a chilling effect on adversarial machine learning security research and cybersecurity. At issue is a portion of the Computer Fraud and Abuse Act (CFAA). A ruling to decide how part of the law is interpreted could shape the future of cybersecurity and adversarial machine learning. If the U.S. Supreme Court takes up an appeal case based on the CFAA next year, researchers predict that the court will ultimately choose a narrow definition of the clause related to "exceed authorized access" instead of siding with the circuit courts that have taken a broad reading of the law. One circuit court ruling on the subject concluded that a broad view would turn millions of people into unsuspecting criminals.


Predictive policing algorithms are racist. They need to be dismantled.

#artificialintelligence

Yeshimabeit Milner was in high school the first time she saw kids she knew getting handcuffed and stuffed into police cars. It was February 29, 2008, and the principal of a nearby school in Miami, with a majority Haitian and African-American population, had put one of his students in a chokehold. The next day several dozen kids staged a peaceful demonstration. That night, Miami's NBC 6 News at Six kicked off with a segment called "Chaos on Campus." Cut to blurry phone footage of screaming teenagers: "The chaos you see is an all-out brawl inside the school's cafeteria." Students told reporters that police hit them with batons, threw them on the floor, and pushed them up against walls. The police claimed they were the ones getting attacked--"with water bottles, soda pops, milk, and so on"--and called for emergency backup. Around 25 students were arrested, and many were charged with multiple crimes, including resisting arrest with violence.


A new tool translates 4000-year old stories using machine learning

#artificialintelligence

Ancient Egyptians used hieroglyphs over four millennia ago to engrave and record their stories. Today, only a select group of people know how to read or interpret those inscriptions. To read and decipher the ancient hieroglyphic writing, researchers and scholars have been using the Rosetta Stone, an irregularly shaped black granite stone. In 2017, game developer Ubisoft launched an initiative to use AI and machine learning to understand the written language of the Pharaohs. The initiative brought together researchers from Australia's Macquarie University and Google's Arts & Culture division.


How valuable is your AI trust currency?

#artificialintelligence

AI is proving its ability to find patterns hidden within troves of data, accelerate decisions and predictions based in fact, and save us time, energy, and money. Yet, even with recent advancements and investments, organizations with AI in production are still the exception rather than the rule. In a world hampered by the need for instant gratification, end users and stakeholders quickly become skeptics when an AI pilot fails or a high-profile AI implementation results in unintended consequences. The psychological implications caused by doubt and second guessing can quickly lead to discouragement and distrust of good solution ideas. If trust is the currency of business and life, how will a digital-first world and increased reliance on machine learning technologies affect your AI trust currency balance?


Are Clogged Blood Vessels the Key to Treating Alzheimer's Disease?

Discover - Top Stories

Citizen Science Salon is a partnership between Discover and SciStarter.org. In 2016, a team of Alzheimer's disease researchers at Cornell University hit a dead end. The scientists were studying mice, looking for links between Alzheimer's and blood flow changes in the brain. For years, scientists have known that reduced blood flow in the brain is a symptom of Alzheimer's disease. More recent research has also shown that this reduced blood flow can be caused by clogged blood vessels -- or "stalls." And by reversing these stalls in mice, scientists were able to restore their memory.


A New Gadget Stops Voice Assistants From Snooping on You

WIRED

As the popularity of Amazon Alexa and other voice assistants grows, so too does the number of ways those assistants both do and can intrude on users' privacy. Examples include hacks that use lasers to surreptitiously unlock connected doors and start cars, malicious assistant apps that eavesdrop and phish passwords, and conversations that are surreptitiously and routinely monitored by provider employees or subpoenaed for use in criminal trials. Now, researchers have developed a device that may one day allow users to take back their privacy by warning when these assistants are mistakenly or intentionally snooping on nearby people. This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED's parent company, Condé Nast.


Patients aren't being told about the AI systems advising their care

#artificialintelligence

Since February of last year, tens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients have any idea about the AI involved in their care. That's because frontline clinicians at M Health Fairview generally don't mention the AI whirring behind the scenes in their conversations with patients. At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools -- many of them unproven -- to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about, or asked to consent to, the use of these tools in their care, a STAT examination has found.